Gimbal system control algorithm of unmanned aerial vehicle based on extended state observer
Huzhen GAO, Changping DU, Yao ZHENG
Journal of Computer Applications    2024, 44 (2): 604-610.   DOI: 10.11772/j.issn.1001-9081.2023020241

To address the coupling among variables in three-axis gimbal stabilization control for Unmanned Aerial Vehicles (UAVs), a UAV gimbal system control algorithm based on an Extended State Observer (ESO) was proposed. Firstly, an attitude solution model for the desired gimbal angles was developed. Secondly, cascaded PID (Proportional-Integral-Derivative) position and velocity control loops were constructed. Finally, an ESO was introduced to estimate the angular velocity term online in real time, solving the problem that this term is difficult to measure directly because of strong coupling and multiple external disturbances, and the estimate was used to compensate the control input of each channel. The experimental results show that in the scenarios without command, with command, and with composite tasks, the root mean square angle errors of the proposed algorithm are 0.235 7°, 0.631 7°, and 0.946 3°, respectively, reductions of 69.43%, 53.29%, and 50.43% compared with the traditional PID algorithm. The proposed algorithm exhibits stronger disturbance rejection and higher control accuracy.
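The combination of cascaded PID loops and ESO-based disturbance compensation described above can be sketched for a single gimbal axis. This is a minimal illustration, not the paper's model: the double-integrator plant, the sinusoidal disturbance, and all gains (`b0`, `w0`, `kp`, `kd`) are assumed values chosen only to make the sketch run.

```python
import numpy as np

def eso_step(z, y, u, b0, beta, dt):
    """One Euler update of a third-order linear Extended State Observer.

    z = [z1, z2, z3]: estimates of the gimbal angle, the angular velocity,
    and the lumped ("total") disturbance; y is the measured angle and u is
    the applied control input."""
    e = z[0] - y
    return np.array([
        z[0] + dt * (z[1] - beta[0] * e),
        z[1] + dt * (z[2] - beta[1] * e + b0 * u),
        z[2] + dt * (-beta[2] * e),
    ])

def simulate(t_end=3.0, dt=1e-3, b0=1.0, w0=50.0, kp=100.0, kd=20.0):
    """Step response of one axis modelled as a double integrator under a
    sinusoidal disturbance, with a PD law acting on the ESO estimates and
    the estimated disturbance subtracted from the control input."""
    beta = np.array([3*w0, 3*w0**2, w0**3])   # observer gains for bandwidth w0
    x = np.zeros(2)                           # true state: [angle, angular velocity]
    z = np.zeros(3)                           # observer state
    ref = 1.0                                 # desired angle from the attitude solver
    for k in range(int(t_end / dt)):
        d = 2.0 * np.sin(3.0 * k * dt)        # unknown external disturbance
        u0 = kp * (ref - z[0]) - kd * z[1]    # position/velocity loop on estimates
        u = (u0 - z[2]) / b0                  # compensate the estimated disturbance
        z = eso_step(z, x[0], u, b0, beta, dt)
        x = x + dt * np.array([x[1], b0 * u + d])   # Euler plant integration
    return x, z
```

Because the observer supplies both the angular-velocity estimate and the lumped-disturbance estimate, neither quantity needs to be measured directly, which is the role the ESO plays in each channel of the gimbal controller.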

Trajectory control of quadrotor based on reinforcement learning-iterative learning
Xuguang LIU, Changping DU, Yao ZHENG
Journal of Computer Applications    2022, 42 (12): 3950-3956.   DOI: 10.11772/j.issn.1001-9081.2021101814

To further improve the trajectory tracking accuracy of a quadrotor in unknown environments, a control method that adds an iterative learning feedforward controller to the traditional feedback control architecture was proposed. To address the difficulty of tuning the learning parameters in Iterative Learning Control (ILC), a method that uses Reinforcement Learning (RL) to tune and optimize the learning parameters of the iterative learning controller was proposed. Firstly, RL was used to optimize the learning parameters, selecting the optimal parameters for the current environment and task to ensure the best control effect of the iterative learning controller. Then, exploiting the learning ability of the iterative learning controller, the feedforward input was optimized iteratively until perfect tracking was achieved. Finally, comparison experiments were carried out in a simulation environment with random noise among the proposed Reinforcement Learning-Iterative Learning Control (RL-ILC) algorithm, ILC without parameter optimization, Sliding Mode Control (SMC), and Proportional-Integral-Derivative (PID) control. Experimental results show that after two iterations, the total error of the proposed algorithm is reduced to 0.2% of the initial error, achieving rapid convergence. Compared with the SMC and PID methods, the RL-ILC algorithm is unaffected by noise and produces no trajectory fluctuations after convergence. These results illustrate that the proposed algorithm can effectively improve the accuracy and robustness of trajectory tracking.
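The trial-to-trial feedforward update at the core of ILC, and the idea of searching for a good learning gain, can be sketched as follows. Everything here is an illustrative assumption rather than the paper's setup: a first-order toy plant stands in for the quadrotor, a P-type update stands in for the paper's ILC law, and a crude grid search over candidate gains stands in for the RL-based tuning.

```python
import numpy as np

def run_trial(u, ref, a=0.3, b=1.0):
    """One trial of the toy plant y(k+1) = a*y(k) + b*u(k);
    returns the tracking error at each controlled sample."""
    y = 0.0
    e = np.empty_like(u)
    for k in range(len(u)):
        y = a * y + b * u[k]
        e[k] = ref[k] - y
    return e

def ilc(L, iters=15, N=50):
    """P-type ILC: repeat the same task, updating the feedforward input
    from the previous trial's error with learning gain L; returns the
    peak tracking error after the final trial."""
    ref = np.sin(np.linspace(0.0, 2.0 * np.pi, N))
    u = np.zeros(N)
    for _ in range(iters):
        e = run_trial(u, ref)
        u = u + L * e            # trial-to-trial feedforward update
    return np.abs(run_trial(u, ref)).max()

# Crude stand-in for the RL-based tuning: evaluate a few candidate gains
# over short runs and keep the one with the smallest residual error.
best_L = min([0.3, 0.6, 0.9, 1.2], key=lambda L: ilc(L, iters=5))
```

The key property the sketch preserves is that the feedforward input is refined between repetitions of the same task, so the residual error shrinks from trial to trial once the learning gain is well chosen, which is exactly what the gain-tuning step is for.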
